Current Issue: July - September, Volume: 2015, Issue Number: 3, Articles: 6
We propose a technique for automatic verification of software patches for user virtual environments on Infrastructure as a Service (IaaS) Cloud to reduce the cost of verifying patches. IaaS services have been spreading rapidly, and many users can customize virtual machines on IaaS Cloud like their own private servers. However, users must install and verify software patches of the OS or middleware installed on their virtual machines by themselves, and this task increases the users' operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for user virtual environments from a test case database (DB), distributes patches to virtual machines in the replicated environments, and executes the test cases automatically on the replicated environments. To reduce test case creation effort, we propose a two-tier abstraction that groups software into software groups and function groups and selects the test cases belonging to each group. We applied the proposed method on OpenStack using Jenkins and confirmed its feasibility. We evaluated the reduction in test case creation effort and the automatic verification performance of environment replication, test case extraction, and test case execution....
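The two-tier abstraction can be pictured with a minimal sketch. The group names, DB contents, and the select_test_cases helper below are hypothetical illustrations, not the paper's actual schema: each installed package maps to a software group, each software group to function groups, and the test cases registered for those function groups are pulled from the test case DB.

# Hypothetical sketch of two-tier test case selection.
# Tier 1: concrete software -> software group
SOFTWARE_GROUPS = {
    "mysql-5.6": "relational-db",
    "mariadb-10.0": "relational-db",
    "apache2-2.4": "web-server",
    "nginx-1.8": "web-server",
}
# Tier 2: software group -> function groups it must satisfy
FUNCTION_GROUPS = {
    "relational-db": ["sql-crud", "replication"],
    "web-server": ["http-get", "tls-handshake"],
}
# Test case DB keyed by function group
TEST_CASE_DB = {
    "sql-crud": ["insert_select_delete_test"],
    "replication": ["slave_sync_test"],
    "http-get": ["static_page_test"],
    "tls-handshake": ["https_connect_test"],
}

def select_test_cases(installed_software):
    """Return the test cases to run on a replicated user environment."""
    cases = []
    for sw in installed_software:
        group = SOFTWARE_GROUPS.get(sw)
        for func in FUNCTION_GROUPS.get(group, []):
            cases.extend(TEST_CASE_DB.get(func, []))
    return cases

print(select_test_cases(["mysql-5.6", "apache2-2.4"]))

Because test cases are attached to groups rather than to individual packages, adding a new package only requires registering its group membership, which is the source of the effort reduction claimed above.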
Outsourcing data to external providers has gained momentum with the advent of cloud computing. Encryption allows data confidentiality to be preserved when outsourcing data to untrusted external providers that may be compromised by attackers. However, encryption has to be applied in a way that still allows the external provider to evaluate queries received from the client. Even though confidential database-as-a-service (DaaS) is still an active field of research, various techniques already address this problem, which we call confidentiality preserving indexing approaches (CPIs). CPIs make individual tradeoffs between the functionality provided, i.e., the types of queries that can be evaluated, the level of protection achieved, and performance. In this paper, we present a taxonomy of requirements that CPIs have to satisfy in deployment scenarios, including the required functionality and the required level of protection against various attackers. We show that the taxonomy's underlying principles serve as a methodology to assess CPIs, primarily by linking attacker models to CPI security properties. Using this methodology, we survey and assess ten previously proposed CPIs. The resulting CPI catalog can help readers who want to build DaaS solutions make design decisions, while the proposed taxonomy and methodology can also be applied to assess upcoming CPI approaches....
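As a generic illustration of the functionality-versus-protection tradeoff a CPI makes (this is not one of the ten surveyed approaches), the sketch below builds a deterministic index: equal plaintexts yield equal index tokens, so the untrusted provider can answer equality queries without seeing plaintext, but equality patterns necessarily leak. The key name and the use of HMAC as a stand-in for deterministic encryption are assumptions for the example.

import hmac, hashlib

KEY = b"client-secret-key"  # hypothetical key held only by the client

def index_token(value: str) -> str:
    # HMAC as a stand-in for deterministic encryption of the indexed value
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The client uploads (token, encrypted row); row encryption is omitted here.
outsourced = {index_token("alice"): "<ciphertext of alice's row>"}

# The provider matches an equality query by token, never seeing "alice",
# but identical values always produce identical tokens (the leakage tradeoff).
print(outsourced.get(index_token("alice")))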
In this paper, we introduce a model of task scheduling for a cloud-computing data center to analyze energy-efficient task scheduling. We formulate the assignment of tasks to servers as an integer-programming problem with the objective of minimizing the energy consumed by the servers of the data center. We prove that a greedy task scheduler keeps the service time within the constraint while minimizing the number of active servers. As a practical approach, we propose the most-efficient-server-first task-scheduling scheme to minimize the energy consumption of servers in a data center. Most-efficient-server-first schedules tasks onto a minimum number of servers while keeping the data-center response time within a maximum constraint. We also prove the stability of the most-efficient-server-first scheme for tasks with exponentially distributed, independent, and identically distributed arrivals. Simulation results show that the server energy consumption of the proposed most-efficient-server-first scheduling scheme is 70 times lower than that of a random-based task-scheduling scheme....
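The greedy idea behind most-efficient-server-first can be sketched as follows. The server parameters, capacity check, and class names below are illustrative assumptions rather than the paper's exact model: each arriving task is packed onto the most energy-efficient server that can still accept it without violating the response-time constraint, so less efficient servers are activated only when needed.

# Illustrative MESF-style greedy scheduler (assumed parameters, not the paper's model).
class Server:
    def __init__(self, name, capacity, power_per_task):
        self.name = name
        self.capacity = capacity              # max concurrent tasks under the delay constraint
        self.power_per_task = power_per_task  # energy-cost proxy (lower = more efficient)
        self.load = 0

def schedule(task_id, servers):
    """Assign a task to the most efficient server that still meets the constraint."""
    for srv in sorted(servers, key=lambda s: s.power_per_task):
        if srv.load < srv.capacity:           # stand-in for the response-time check
            srv.load += 1
            return srv.name
    return None                               # all servers saturated

servers = [Server("s1", 4, 1.0), Server("s2", 4, 1.5), Server("s3", 4, 2.0)]
for t in range(6):
    print(t, schedule(t, servers))

In this toy run all six tasks land on s1 and s2, the two most efficient servers, while s3 stays idle, which is the behavior that keeps the number of active servers, and hence energy, low.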
We are increasingly dependent on computers and networks, so it has become important to secure them and protect important data from misuse. Network security is one of the most important issues and has attracted many researchers and development teams. Attackers can exploit weaknesses in a cloud system and compromise virtual machines in order to launch Distributed Denial-of-Service (DDoS) attacks through the compromised zombies. In the cloud system, detecting zombie exploration attacks is extremely difficult because cloud users may install vulnerable applications on their virtual machines. An efficient method is therefore needed to detect such compromised machines in the network that are involved in activities like spamming. To prevent vulnerable machines from being compromised in the cloud, we offer a multi-phase distributed vulnerability detection, measurement, and countermeasure selection framework called NICE, which is built on attack-graph-based analytical models and reconfigurable virtual network-based countermeasures....
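A rough sketch of the kind of attack-graph reasoning NICE builds on is given below. The graph, alert, and countermeasure catalogue are invented for illustration and do not reproduce NICE's actual models: an alert on a vulnerable node triggers enumeration of exploit paths toward the attacker's goal, and a virtual-network countermeasure is selected for the alerted step.

# Hypothetical attack graph: node -> exploitable next steps
attack_graph = {
    "internet": ["web_vm:sql_injection"],
    "web_vm:sql_injection": ["db_vm:privilege_escalation"],
    "db_vm:privilege_escalation": ["zombie:ddos_launch"],
}
# Hypothetical reconfigurable virtual-network countermeasures
countermeasures = {
    "web_vm:sql_injection": "deep packet inspection on web_vm traffic",
    "db_vm:privilege_escalation": "quarantine db_vm in an isolated virtual network",
}

def attack_paths(graph, start, target, path=None):
    """Enumerate exploit paths from an alerted node to the attacker's goal."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths += attack_paths(graph, nxt, target, path)
    return paths

# An alert on the web VM triggers path analysis and countermeasure selection.
for p in attack_paths(attack_graph, "web_vm:sql_injection", "zombie:ddos_launch"):
    print("path:", " -> ".join(p), "| countermeasure:", countermeasures[p[0]])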
The foundation of Cloud Computing is the sharing of computing resources that are dynamically allocated and released on demand with minimal management effort. Most of the time, computing resources such as processors, memory and storage are allocated through commodity hardware virtualization, which distinguishes cloud computing from other technologies. One of the objectives of this technology is processing and storing very large amounts of data, also referred to as Big Data. Sometimes, anomalies and defects found in Cloud platforms affect the performance of Big Data Applications, resulting in degradation of Cloud performance. One of the challenges in Big Data is how to analyze the performance of Big Data Applications in order to determine the main factors that affect their quality. The performance analysis results are very important because they help to detect the source of degradation in the applications as well as in the Cloud. Furthermore, such results can be used in future resource planning stages, at the time of designing Service Level Agreements, or simply to improve the applications. This paper proposes a performance analysis model for Big Data Applications, which integrates software quality concepts from ISO 25010. The main goal of this work is to fill the gap that exists between the quantitative (numerical) representation of quality concepts in software engineering and the measurement of performance of Big Data Applications. To this end, we propose the use of statistical methods to establish relationships between performance measures extracted from Big Data Applications and Cloud Computing platforms and the software engineering quality concepts....
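The sort of statistical relationship the model looks for can be sketched with a simple correlation between a platform-level measure and an application-level measure that maps to an ISO 25010 characteristic such as time behaviour. The sample values and variable names below are made up for illustration and do not come from the paper.

# Hypothetical measures: Cloud platform load vs. Big Data application duration.
cpu_utilization = [0.42, 0.55, 0.63, 0.71, 0.80, 0.88]   # platform measure
job_duration_s  = [118, 131, 140, 158, 173, 190]          # application measure

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation would suggest platform load is a main factor
# degrading the application's time behaviour (ISO 25010 performance efficiency).
print(round(pearson(cpu_utilization, job_duration_s), 3))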
The efficient storage-as-a-service feature provided by cloud storage providers has changed the way data is stored by most users. Cloud storage providers offer space on servers that can be utilized by cloud users. Data grows at an impressive rate every year, but a key point is that much of this data is duplicated. Although keeping multiple copies of data helps to provide higher availability and long-term durability, the resulting redundancy is excessive. Data deduplication is a solution to tackle this redundancy: by keeping a single copy of repeated data, it is considered a promising approach. It reduces storage cost by eliminating duplicate data and improves the user experience by reducing upload time and saving network bandwidth. This paper presents a method for file-level deduplication in cloud storage. It also presents a possible threat of uncontrolled deletion of files by clients in a previous approach, and its solution....
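A minimal sketch of file-level deduplication with reference counting is shown below. The use of SHA-256 content hashes and the in-memory dictionaries are assumptions for the example, not the paper's design; the reference count illustrates one way to address the deletion threat mentioned above, since the single stored copy is reclaimed only when no owner still references it.

import hashlib

store = {}       # hash -> file bytes (single stored copy)
refcount = {}    # hash -> number of owners referencing the copy
ownership = {}   # (user, filename) -> hash

def upload(user, filename, data: bytes):
    h = hashlib.sha256(data).hexdigest()
    if h not in store:
        store[h] = data                    # first upload: actually store the bytes
    refcount[h] = refcount.get(h, 0) + 1   # later uploads only add a reference
    ownership[(user, filename)] = h

def delete(user, filename):
    h = ownership.pop((user, filename))
    refcount[h] -= 1
    if refcount[h] == 0:                   # last reference gone: safe to reclaim space
        del store[h], refcount[h]

upload("alice", "report.pdf", b"same bytes")
upload("bob", "copy.pdf", b"same bytes")
delete("alice", "report.pdf")
print(len(store))  # 1 -- Bob's reference keeps the single stored copy alive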